
Conversation

@adivate2021 adivate2021 commented Sep 17, 2025

@adivate2021 adivate2021 marked this pull request as draft September 17, 2025 19:17

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @adivate2021, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive system for managing prompts within the application. It establishes a new Prompt dataclass for structured prompt handling, extends the API with dedicated endpoints for prompt insertion and retrieval, and provides corresponding client-side methods and data models. This enhancement streamlines the process of defining, storing, and utilizing dynamic prompt templates.

Highlights

  • New Prompt Dataclass: Introduced a new Prompt dataclass in src/judgeval/prompts/prompt.py to encapsulate prompt definition, storage, and compilation logic. This class includes create and get class methods for interacting with the API, and a compile method for variable substitution (see the usage sketch after this list).
  • API Endpoint Additions: Added new API endpoints /prompts/insert/ and /prompts/fetch/ to handle the creation and retrieval of prompts. These endpoints are integrated into the API generation scripts.
  • API Client Functionality: Implemented prompts_insert and prompts_fetch methods within both the synchronous and asynchronous API clients (JudgmentSyncClient and JudgmentAsyncClient) to facilitate interaction with the new prompt management endpoints.
  • Data Model Definitions: Defined new TypedDict and Pydantic BaseModel classes (PromptInsertRequest, PromptInsertResponse, PromptFetchResponse) to standardize the data structures for prompt-related API requests and responses.
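
For orientation, here is a minimal usage sketch assembled from the highlights above. The signatures are inferred from this review thread, and the project and prompt names are placeholder values, not the merged API surface:

```python
from judgeval.prompts.prompt import Prompt

# Create and store a new prompt version (backed by POST /prompts/insert/).
greeting = Prompt.create(
    project_name="my-project",   # placeholder
    name="greeting",             # placeholder
    prompt="Hello {{user_name}}, welcome to {{product}}!",
    tags=["v1"],
)

# Fetch a stored version later, by tag or commit_id (GET /prompts/fetch/).
fetched = Prompt.get(project_name="my-project", name="greeting", tag="v1")

# Substitute the {{...}} template variables.
text = fetched.compile(user_name="Ada", product="Judgeval")
```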


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a Prompt dataclass along with methods to create, fetch, and compile prompts, and integrates them into the API. The changes are generally well-structured. However, I've identified a couple of critical issues in src/judgeval/prompts/prompt.py related to unsafe dictionary access for optional fields, which could lead to KeyError exceptions. Additionally, there are a few medium-severity issues concerning code style, a missing type hint, and a potential regression in error handling in prompt_scorer.py that could affect user experience and code consistency. I've provided detailed comments and suggestions for each of these points.

```python
r = client.prompts_insert(
    payload={"name": name, "prompt": prompt, "tags": tags}
)
return r["commit_id"], r["parent_commit_id"]
```
Contributor

critical

The parent_commit_id key is not guaranteed to be in the response dictionary r as it is marked as NotRequired in the PromptInsertResponse type definition. Accessing it directly with r["parent_commit_id"] will raise a KeyError if the key is absent. You should use r.get("parent_commit_id") for safe access.

Suggested change:
```diff
-return r["commit_id"], r["parent_commit_id"]
+return r["commit_id"], r.get("parent_commit_id")
```

Contributor

key should always be there but added anyways

```python
            prompt=prompt_config["prompt"],
            tags=prompt_config["tags"],
            commit_id=prompt_config["commit_id"],
            parent_commit_id=prompt_config["parent_commit_id"],
```
Contributor

critical

The parent_commit_id key is not guaranteed to be in the prompt_config dictionary as it is marked as NotRequired in the PromptFetchResponse type definition. Accessing it directly with prompt_config["parent_commit_id"] will raise a KeyError if the key is absent. You should use prompt_config.get("parent_commit_id") for safe access.

Suggested change:
```diff
-parent_commit_id=prompt_config["parent_commit_id"],
+parent_commit_id=prompt_config.get("parent_commit_id"),
```

Contributor

key should always be present (may return none), but added anyways

```python
    tag: Optional[str] = None,
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
    organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
Contributor

medium

The function fetch_prompt is missing a return type hint. Based on its usage and the prompts_fetch method it calls, the return type should be PromptFetchResponse. Adding type hints improves code clarity and allows for better static analysis. You will also need to import PromptFetchResponse from judgeval.api.api_types.

Suggested change:
```diff
-):
+) -> "PromptFetchResponse":
```

Contributor Author

+1

Contributor

added

Comment on lines 65 to 66
```python
    if not tags:
        tags = []
```
Contributor

medium

While if not tags: works, it's more idiomatic and explicit in Python to check for None with if tags is None:. This avoids potential confusion if an empty list is passed intentionally and you wanted to treat it differently from None (though in this case the outcome is the same).

Suggested change:
```diff
-if not tags:
-    tags = []
+if tags is None:
+    tags = []
```

Contributor

changed

@justinsheu justinsheu marked this pull request as ready for review October 12, 2025 05:07

@greptile-apps greptile-apps bot left a comment


Greptile Overview

Greptile Summary

This PR introduces a comprehensive prompt management system to the JudgmentLabs platform. The implementation adds a new Prompt dataclass with version control capabilities, template compilation, and CRUD operations. The changes span multiple layers of the system: API endpoint definitions in scripts (api_generator.py, openapi_transform.py), auto-generated type definitions (api_types.py, judgment_types.py), API client methods (api/__init__.py), and the main Prompt class implementation (prompts/prompt.py).

The core feature is a Git-like versioning system for prompts with commit tracking, tagging capabilities, and project association. The Prompt class supports template compilation using double-brace syntax ({{variable}}) converted to Python's Template format. Users can create, fetch, tag, untag, and list prompt versions through both synchronous and asynchronous API methods. The implementation follows the existing codebase patterns and integrates cleanly with the current API client infrastructure.
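
The compilation step is small enough to sketch standalone. This version uses the `\w+` pattern quoted later in this thread (the author subsequently generalized it): it rewrites {{name}} placeholders into string.Template's $name form and then substitutes.

```python
import re
from string import Template

def compile_prompt(prompt: str, **variables: str) -> str:
    # {{name}} -> $name, then let string.Template do the substitution.
    template_str = re.sub(r"\{\{(\w+)\}\}", r"$\1", prompt)
    return Template(template_str).substitute(**variables)

print(compile_prompt("Hello {{user}}!", user="Ada"))  # Hello Ada!
```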

Changed Files
| Filename | Score | Overview |
| --- | --- | --- |
| scripts/openapi_transform.py | 5/5 | Added 5 new prompt management endpoints to the JUDGEVAL_PATHS list for API generation |
| scripts/api_generator.py | 5/5 | Added 5 new prompt-related endpoints to enable client method generation |
| src/judgeval/api/api_types.py | 5/5 | Auto-generated TypedDict classes for prompt operations with version control support |
| src/judgeval/data/judgment_types.py | 5/5 | Auto-generated Pydantic models for prompt management API endpoints |
| src/judgeval/scorers/judgeval_scorers/api_scorers/prompt_scorer.py | 4/5 | Simplified error handling by removing special HTTP 500 error cases |
| src/judgeval/api/__init__.py | 4/5 | Added five new prompt management methods to sync and async API clients |
| src/judgeval/prompts/prompt.py | 4/5 | New Prompt dataclass with CRUD operations, versioning, and template compilation |

Confidence score: 4/5

  • This PR introduces solid prompt management functionality with minimal risk, but has some implementation concerns that should be addressed
  • Score reflects well-structured code following existing patterns, but missing type annotations and potential KeyError issues prevent a perfect score
  • Pay close attention to src/judgeval/prompts/prompt.py for the missing return type annotation and safe dictionary access patterns

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Prompt
    participant JudgmentSyncClient
    participant JudgmentAPI

    User->>Prompt: "create(project_name, name, prompt, tags)"
    Prompt->>JudgmentSyncClient: "prompts_insert(payload)"
    JudgmentSyncClient->>JudgmentAPI: "POST /prompts/insert/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptInsertResponse"
    JudgmentSyncClient-->>Prompt: "commit_id, parent_commit_id"
    Prompt-->>User: "Prompt instance"

    User->>Prompt: "get(project_name, name, commit_id/tag)"
    Prompt->>JudgmentSyncClient: "prompts_fetch(name, project_name, commit_id, tag)"
    JudgmentSyncClient->>JudgmentAPI: "GET /prompts/fetch/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptFetchResponse"
    JudgmentSyncClient-->>Prompt: "prompt_config"
    Prompt-->>User: "Prompt instance"

    User->>Prompt: "compile(**kwargs)"
    Prompt->>Prompt: "Template.substitute(**kwargs)"
    Prompt-->>User: "compiled_prompt_string"

    User->>Prompt: "tag(project_name, name, commit_id, tags)"
    Prompt->>JudgmentSyncClient: "prompts_tag(payload)"
    JudgmentSyncClient->>JudgmentAPI: "POST /prompts/tag/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptTagResponse"
    JudgmentSyncClient-->>Prompt: "commit_id"
    Prompt-->>User: "commit_id"
```

Additional Comments (1)

  1. src/judgeval/scorers/judgeval_scorers/api_scorers/prompt_scorer.py, lines 88-98

    style: Inconsistent error handling - why keep 500 status code special handling here but remove it from push_prompt_scorer and fetch_prompt_scorer?

7 files reviewed, 4 comments


```python
    tag: Optional[str] = None,
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
    organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
Contributor

style: Missing return type annotation - should specify return type for consistency with other functions

File: src/judgeval/prompts/prompt.py, line 44

```python
    _template: Template = field(init=False, repr=False)

    def __post_init__(self):
        template_str = re.sub(r"\{\{(\w+)\}\}", r"$\1", self.prompt)
```
Contributor

logic: The regex pattern only captures word characters (\w+) - variables with hyphens, dots, or other characters won't be matched

File: src/judgeval/prompts/prompt.py, line 136

Contributor

changed to more general
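
The generalized pattern itself is not quoted in this thread. One possible shape (an assumption, not the merged code) accepts any non-brace content inside {{...}} and substitutes directly, since string.Template identifiers cannot contain hyphens or dots:

```python
import re

# Matches {{ key }} where key may contain hyphens, dots, etc.
PLACEHOLDER = re.compile(r"\{\{\s*([^{}]+?)\s*\}\}")

def compile_general(prompt: str, variables: dict) -> str:
    return PLACEHOLDER.sub(lambda m: str(variables[m.group(1)]), prompt)

print(compile_general("Hi {{user-name}}!", {"user-name": "Ada"}))  # Hi Ada!
```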

"You cannot fetch a prompt by both commit_id and tag at the same time"
)
prompt_config = fetch_prompt(project_name, name, commit_id, tag)
if prompt_config is None:
Contributor

logic: This check may not work as expected - API responses typically don't return None, they raise exceptions or return empty objects

File: src/judgeval/prompts/prompt.py, line 167

Contributor

returns none for fetch_prompt w/out commit_id or tag (needed for creating prompt on platform website)

Comment on lines 45 to 48
```python
    client = JudgmentSyncClient(judgment_api_key, organization_id)
    try:
        prompt_config = client.prompts_fetch(project_name, name, commit_id, tag)
        return prompt_config["commit"]
```


[CriticalError]

Potential KeyError: fetch_prompt returns prompt_config["commit"], but if the API call succeeds and the response lacks the "commit" key, this will raise a KeyError. The API type shows PromptFetchResponse has commit as NotRequired[Optional[PromptCommitInfo]], meaning it might not be present in the response.

Suggested change:
```python
def fetch_prompt(
    project_name: str,
    name: str,
    commit_id: Optional[str] = None,
    tag: Optional[str] = None,
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
    organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
    client = JudgmentSyncClient(judgment_api_key, organization_id)
    try:
        prompt_config = client.prompts_fetch(project_name, name, commit_id, tag)
        return prompt_config.get("commit")
    except JudgmentAPIError as e:
        raise JudgmentAPIError(
            status_code=e.status_code,
            detail=f"Failed to fetch prompt '{name}': {e.detail}",
            response=e.response,
        )
```

File: src/judgeval/prompts/prompt.py, line 48

Contributor

commit will always be a key (may be None)

Comment on lines 154 to 179
```python
    @classmethod
    def get(
        cls,
        project_name: str,
        name: str,
        commit_id: Optional[str] = None,
        tag: Optional[str] = None,
    ):
        if commit_id is not None and tag is not None:
            raise ValueError(
                "You cannot fetch a prompt by both commit_id and tag at the same time"
            )
        prompt_config = fetch_prompt(project_name, name, commit_id, tag)
        if prompt_config is None:
            raise ValueError(f"Prompt '{name}' not found in project '{project_name}'")
        return cls(
            name=prompt_config["name"],
            prompt=prompt_config["prompt"],
            tags=prompt_config["tags"],
            commit_id=prompt_config["commit_id"],
            parent_commit_id=prompt_config["parent_commit_id"],
            metadata={
                "creator_first_name": prompt_config["first_name"],
                "creator_last_name": prompt_config["last_name"],
            },
        )
```


[CriticalError]

Potential KeyError in dictionary access: The Prompt.get() method assumes all expected keys exist in prompt_config without validation. If the API response is missing required fields like "name", "prompt", "tags", etc., this will raise a KeyError.

Suggested change:
```python
    @classmethod
    def get(
        cls,
        project_name: str,
        name: str,
        commit_id: Optional[str] = None,
        tag: Optional[str] = None,
    ):
        if commit_id is not None and tag is not None:
            raise ValueError(
                "You cannot fetch a prompt by both commit_id and tag at the same time"
            )
        prompt_config = fetch_prompt(project_name, name, commit_id, tag)
        if prompt_config is None:
            raise ValueError(f"Prompt '{name}' not found in project '{project_name}'")

        # Validate required fields exist
        required_fields = ["name", "prompt", "tags", "commit_id", "first_name", "last_name"]
        for field in required_fields:
            if field not in prompt_config:
                raise ValueError(f"Invalid API response: missing required field '{field}'")

        return cls(
            name=prompt_config["name"],
            prompt=prompt_config["prompt"],
            tags=prompt_config["tags"],
            commit_id=prompt_config["commit_id"],
            parent_commit_id=prompt_config.get("parent_commit_id"),
            metadata={
                "creator_first_name": prompt_config["first_name"],
                "creator_last_name": prompt_config["last_name"],
            },
        )
```

File: src/judgeval/prompts/prompt.py, line 179

Contributor

should exist

Comment on lines 191 to 208
```python
    @classmethod
    def list(cls, project_name: str, name: str):
        prompt_configs = list_prompt(project_name, name)["versions"]
        return [
            cls(
                name=prompt_config["name"],
                prompt=prompt_config["prompt"],
                tags=prompt_config["tags"],
                commit_id=prompt_config["commit_id"],
                parent_commit_id=prompt_config["parent_commit_id"],
                metadata={
                    "creator_first_name": prompt_config["first_name"],
                    "creator_last_name": prompt_config["last_name"],
                    "created_at": prompt_config["created_at"],
                },
            )
            for prompt_config in prompt_configs
        ]
```


[BestPractice]

Potential KeyError in dictionary access: The Prompt.list() method assumes all expected keys exist in each prompt_config item without validation. If any API response item is missing required fields, this will raise a KeyError.

Suggested change:
```python
    @classmethod
    def list(cls, project_name: str, name: str):
        prompt_configs = list_prompt(project_name, name)["versions"]
        result = []

        for prompt_config in prompt_configs:
            # Validate required fields exist
            required_fields = ["name", "prompt", "tags", "commit_id", "first_name", "last_name", "created_at"]
            for field in required_fields:
                if field not in prompt_config:
                    raise ValueError(f"Invalid API response: missing required field '{field}' in prompt version")

            result.append(cls(
                name=prompt_config["name"],
                prompt=prompt_config["prompt"],
                tags=prompt_config["tags"],
                commit_id=prompt_config["commit_id"],
                parent_commit_id=prompt_config.get("parent_commit_id"),
                metadata={
                    "creator_first_name": prompt_config["first_name"],
                    "creator_last_name": prompt_config["last_name"],
                    "created_at": prompt_config["created_at"],
                },
            ))

        return result
```

File: src/judgeval/prompts/prompt.py, line 208

Contributor

should exist

@justinsheu justinsheu changed the title Add Prompt dataclass with initial methods Add Prompt dataclass with initial methods (JUD-2082) Oct 16, 2025

linear bot commented Oct 16, 2025


@adivate2021 adivate2021 left a comment


lgtm

```python
    tag: Optional[str] = None,
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
    organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
Contributor Author

+1

Comment on lines 288 to 290
```python
    assert prompt_list[0].prompt == "version 3", "First prompt should be version 1"
    assert prompt_list[1].prompt == "version 2", "Second prompt should be version 2"
    assert prompt_list[2].prompt == "version 1", "Third prompt should be version 3"
```


[BestPractice]

The assertion messages here are misleading. The test correctly asserts that the prompts are listed from newest to oldest (v3, v2, v1), but the failure messages state the opposite order. Updating them will improve clarity if this test fails in the future.

Suggested change:
```python
    assert prompt_list[0].prompt == "version 3", "First prompt in list should be the latest (version 3)"
    assert prompt_list[1].prompt == "version 2", "Second prompt in list should be version 2"
    assert prompt_list[2].prompt == "version 1", "Third prompt in list should be the oldest (version 1)"
```

File: src/e2etests/test_prompts.py, line 290

```python
    def prompts_fetch(
        self,
        name: str,
        project_name: Optional[str] = None,
```
Contributor

why does this require both? can we make the server endpoint only require project_id? and we can resolve it locally and cache it once.

Contributor

also, can we make it project_id only, and not optional

```python
    def prompts_get_prompt_versions(
        self,
        name: str,
        project_id: Optional[str] = None,
```
Contributor

same here, can we make only project_id mandatory

Contributor

fixed

```python
        name: str,
        project_id: Optional[str] = None,
        project_name: Optional[str] = None,
        get_user_avatars: Optional[str] = None,
```
Contributor

what is get_user_avatars? that doesn't seem like a flag that should be tied to this function in the backend

Contributor

removed

```python
    name: str,
    prompt: str,
    tags: List[str],
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
```
Contributor

(nit) Can we use the JUDGMENT_API_KEY and JUDGMENT_ORG_ID variables from env.py everywhere in this file

Contributor

changed


✔️ Propel has finished reviewing this change.


@justinsheu justinsheu left a comment


lgtm (from @adivate2021 and @abhishekg999)

@justinsheu justinsheu merged commit 18755fc into staging Oct 21, 2025
15 of 19 checks passed
